Linear regression

Figure: Example of linear regression with one independent variable.

In statistics, linear regression is an approach to modeling the relationship between a scalar dependent variable y and one or more explanatory variables denoted X. In linear regression, data are modeled using linear predictor functions, and the unknown model parameters are estimated from the data. Such models are called “linear models.” Most commonly, linear regression refers to a model in which the conditional mean of y given the value of X is an affine function of X. Less commonly, linear regression may refer to a model in which the median, or some other quantile, of the conditional distribution of y given X is expressed as a linear function of X. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of y given X, rather than on the joint probability distribution of y and X, which is the domain of multivariate analysis.

Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models that depend linearly on their unknown parameters are easier to fit than models that are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine.

Linear regression has many practical uses. Most applications fall into one of two broad categories: prediction or forecasting, in which a fitted model is used to predict the response for new values of the explanatory variables; and explanation, in which the model is used to quantify the strength of the relationship between the response and the explanatory variables.

Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the “lack of fit” in some other norm, or by minimizing a penalized version of the least squares loss function as in ridge regression. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, while the terms “least squares” and “linear model” are closely linked, they are not synonymous.


Introduction to linear regression

Given a data set \{y_i,\, x_{i1}, \ldots, x_{ip}\}_{i=1}^n of n statistical units, a linear regression model assumes that the relationship between the dependent variable yi and the p-vector of regressors xi is approximately linear. This approximate relationship is modeled through a so-called “disturbance term” εi, an unobserved random variable that adds noise to the linear relationship between the dependent variable and regressors. Thus the model takes the form


 y_i = \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i
 = x'_i\beta + \varepsilon_i,
 \qquad i = 1, \ldots, n,

where ′ denotes the transpose, so that x′iβ is the inner product between vectors xi and β.

Often these n equations are stacked together and written in vector form as


 y = X\beta + \varepsilon, \,

where


 y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad
 X = \begin{pmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_n \end{pmatrix}
 = \begin{pmatrix} x_{11} & \cdots & x_{1p} \\
 x_{21} & \cdots & x_{2p} \\
 \vdots & \ddots & \vdots \\
 x_{n1} & \cdots & x_{np}
 \end{pmatrix}, \quad
 \beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \quad
 \varepsilon = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.

Some remarks on terminology and general use:

Example. Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent hi at various moments in time ti. Physics tells us that, ignoring drag, the relationship can be modeled as


 h_i = \beta_1 t_i + \beta_2 t_i^2 + \varepsilon_i,

where β1 determines the initial velocity of the ball, β2 is proportional to the standard gravity, and εi is due to measurement errors. Linear regression can be used to estimate the values of β1 and β2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β1 and β2; if we take regressors xi = (xi1, xi2) = (ti, ti²), the model takes on the standard form


 h_i = x'_i\beta + \varepsilon_i.
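As a concrete illustration, the minimal sketch below (hypothetical data, assuming an initial velocity of 5 m/s and β2 = −g/2 ≈ −4.9 m/s², plus small measurement noise) recovers β1 and β2 by ordinary least squares:

    import numpy as np

    # Hypothetical data: measurement times (s) and noisy heights (m) for the tossed ball,
    # generated with beta_1 = 5.0 (initial velocity) and beta_2 = -4.9 (i.e. -g/2).
    rng = np.random.default_rng(0)
    t = np.linspace(0.1, 0.9, 9)
    h = 5.0 * t - 4.9 * t**2 + rng.normal(0.0, 0.05, t.size)

    # Regressors x_i = (t_i, t_i^2): quadratic in time, but linear in the parameters.
    X = np.column_stack([t, t**2])

    # Ordinary least squares estimate of beta = (beta_1, beta_2).
    beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
    print(beta_hat)  # roughly [5.0, -4.9]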

Assumptions

Two key assumptions are common to all estimation methods used in linear regression analysis:

A simpler statement of this is that there must be enough data available compared to the number of parameters to be estimated. If there is too little data, one ends up with a system of equations that has no unique solution, as illustrated in the sketch below. See partial least squares regression.
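A tiny numerical sketch of this (hypothetical numbers, using NumPy) shows what goes wrong when there are fewer observations than parameters:

    import numpy as np

    # Hypothetical under-determined problem: n = 2 observations but p = 3 parameters.
    X = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
    y = np.array([1.0, 2.0])

    beta, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    print(rank)  # 2 < 3: X does not have full column rank

    # Any vector in the null space of X can be added to beta without changing the fit,
    # so the estimated coefficients are not unique.
    null_dir = np.array([1.0, -2.0, 1.0])  # satisfies X @ null_dir == 0
    print(np.allclose(X @ (beta + null_dir), X @ beta))  # True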

Beyond these two assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:

Interpretation

A fitted linear regression model can be used to identify the relationship between a single predictor variable xj and the response variable y when all the other predictor variables in the model are “held fixed”. Specifically, the interpretation of βj is the expected change in y for a one-unit change in xj when the other covariates are held fixed. This is sometimes called the unique effect of xj on y. In contrast, the marginal effect of xj on y can be assessed using a correlation coefficient or simple linear regression model relating xj to y.

Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to “hold ti fixed” and at the same time change the value of ti²).

It is possible that the unique effect can be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in xj, so that once that variable is in the model, there is no contribution of xj to the variation in y. Conversely, the unique effect of xj can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of y, but they mainly explain variation in a way that is complementary to what is captured by xj. In this case, including the other variables in the model reduces the part of the variability of y that is unrelated to xj, thereby strengthening the apparent relationship with xj.
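The first situation can be made concrete with a small simulation (hypothetical data): below, x2 is nearly a copy of x1 and actually drives y, so x1 has a large marginal effect but a near-zero unique effect.

    import numpy as np

    # Hypothetical data: x2 is nearly a copy of x1, and y is driven by x2.
    rng = np.random.default_rng(1)
    n = 1000
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.1, size=n)
    y = 2.0 * x2 + rng.normal(scale=0.5, size=n)

    # Marginal effect of x1: slope of the simple regression of y on x1 alone.
    marginal = np.polyfit(x1, y, 1)[0]

    # Unique effect of x1: its coefficient in the multiple regression on (1, x1, x2).
    X = np.column_stack([np.ones(n), x1, x2])
    unique = np.linalg.lstsq(X, y, rcond=None)[0][1]

    print(marginal)  # close to 2: x1 alone predicts y well
    print(unique)    # close to 0: once x2 is included, x1 adds almost nothing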

The meaning of the expression “held fixed” may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been “held fixed” by the experimenter. Alternatively, the expression “held fixed” can refer to a selection that takes place in the context of data analysis. In this case, we “hold a variable fixed” by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of “held fixed” that can be used in an observational study.

The notion of a “unique effect” is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[5]

Estimation methods

Numerous procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency.

Some of the more common estimation techniques for linear regression are summarized below.
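The most widely used of these is ordinary least squares (OLS), which chooses the coefficient vector β to minimize the sum of squared residuals; when X has full column rank, this criterion has the closed-form solution


 \hat\beta = (X'X)^{-1} X' y.

Penalized variants such as ridge regression, mentioned above, modify this loss function rather than the form of the model.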

Extensions

Applications of linear regression

Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.

Trend line

For trend lines as used in technical analysis, see Trend lines (technical analysis).

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques such as linear regression. Trend lines typically are straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.
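As a minimal sketch (with made-up yearly values), a straight trend line can be fitted by least squares on a time index:

    import numpy as np

    # Hypothetical yearly observations of some indicator.
    years = np.arange(2000, 2011)
    values = np.array([3.1, 3.4, 3.3, 3.8, 4.0, 4.3, 4.1, 4.6, 4.9, 5.0, 5.2])

    # Least-squares straight line: the slope is the average change per year.
    slope, intercept = np.polyfit(years, values, 1)
    trend = slope * years + intercept
    print(slope)  # positive: the series trends upward over the period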

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

Epidemiology

Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, suppose we have a regression model in which cigarette smoking is the independent variable of interest, and the dependent variable is lifespan measured in years. Researchers might include socio-economic status as an additional independent variable, to ensure that any observed effect of smoking on lifespan is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.

Finance

The capital asset pricing model uses linear regression as well as the concept of Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
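A sketch of this estimation, assuming simulated excess-return series for an asset and for the market, treats Beta as the slope of the fitted regression line:

    import numpy as np

    # Hypothetical monthly excess returns; the asset's "true" beta here is 1.3.
    rng = np.random.default_rng(2)
    market = rng.normal(0.01, 0.04, 60)
    asset = 1.3 * market + rng.normal(0.0, 0.02, 60)

    # Beta is the slope of the regression of asset returns on market returns,
    # which for a single regressor equals cov(asset, market) / var(market).
    beta = np.polyfit(market, asset, 1)[0]
    print(beta)  # close to 1.3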

Regression may not be the appropriate way to estimate beta in finance, given that beta is supposed to provide the volatility of an investment relative to the volatility of the market as a whole. This would require that both these variables be treated in the same way when estimating the slope. However, regression treats all variability as being in the investment returns variable; that is, it only considers residuals in the dependent variable.[19]

Environmental science

Linear regression finds application in a wide range of environmental science settings. In Canada, the Environmental Effects Monitoring Program uses statistical analyses on fish and benthic surveys to measure the effects of pulp mill or metal mine effluent on the aquatic ecosystem.[20]

See also

Further reading

Notes

  1. Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society. Series B (Methodological) 58 (1): 267–288. http://www.jstor.org/stable/2346178.
  2. Efron, Bradley; Hastie, Trevor; Johnstone, Iain; Tibshirani, Robert (2004). "Least Angle Regression". The Annals of Statistics 32 (2): 407–451. doi:10.1214/009053604000000067. http://www.jstor.org/stable/3448465.
  3. Hawkins, Douglas M. (1973). "On the Investigation of Alternative Regressions by Principal Component Analysis". Journal of the Royal Statistical Society. Series C (Applied Statistics) 22 (3): 275–286. http://www.jstor.org/stable/2346776.
  4. Jolliffe, Ian T. (1982). "A Note on the Use of Principal Components in Regression". Journal of the Royal Statistical Society. Series C (Applied Statistics) 31 (3): 300–303. http://www.jstor.org/stable/2348005.
  5. Berk, Richard A. Regression Analysis: A Constructive Critique. Sage. doi:10.1177/0734016807304871.
  6. Lai, T. L.; Robbins, H.; Wei, C. Z. (1978). "Strong consistency of least squares estimates in multiple regression". Proceedings of the National Academy of Sciences USA 75 (7).
  7. del Pino, Guido (1989). "The Unifying Role of Iterative Generalized Least Squares in Statistical Algorithms". Statistical Science 4 (4): 394–403. doi:10.1214/ss/1177012408. http://www.jstor.org/stable/2245853.
  8. Carroll, Raymond J. (1982). "Adapting for Heteroscedasticity in Linear Models". The Annals of Statistics 10 (4): 1224–1233. doi:10.1214/aos/1176345987. http://www.jstor.org/stable/2240725.
  9. Cohen, Michael; Dalal, Siddhartha R.; Tukey, John W. (1993). "Robust, Smoothly Heterogeneous Variance Regression". Journal of the Royal Statistical Society. Series C (Applied Statistics) 42 (2): 339–353. http://www.jstor.org/stable/2986237.
  10. Narula, Subhash C.; Wellington, John F. (1982). "The Minimum Sum of Absolute Errors Regression: A State of the Art Survey". International Statistical Review 50 (3): 317–326. doi:10.2307/1402501. http://www.jstor.org/stable/1402501.
  11. Lange, Kenneth L.; Little, Roderick J. A.; Taylor, Jeremy M. G. (1989). "Robust Statistical Modeling Using the t Distribution". Journal of the American Statistical Association 84 (408): 881–896. doi:10.2307/2290063. http://www.jstor.org/stable/2290063.
  12. Stone, C. J. (1975). "Adaptive maximum likelihood estimators of a location parameter". The Annals of Statistics 3 (2): 267–284. doi:10.1214/aos/1176343056. http://www.jstor.org/stable/2958945.
  13. Goldstein, H. (1986). "Multilevel Mixed Linear Model Analysis Using Iterative Generalized Least Squares". Biometrika 73 (1): 43–56. doi:10.1093/biomet/73.1.43. http://www.jstor.org/stable/2336270.
  14. Nievergelt, Yves (1994). "Total Least Squares: State-of-the-Art Regression in Numerical Analysis". SIAM Review 36 (2): 258–264. doi:10.1137/1036055. http://www.jstor.org/stable/2132463.
  15. Swindel, Benee F. (1981). "Geometry of Ridge Regression Illustrated". The American Statistician 35 (1): 12–15. doi:10.2307/2683577. http://www.jstor.org/stable/2683577.
  16. Draper, Norman R.; van Nostrand, R. Craig (1979). "Ridge Regression and James-Stein Estimation: Review and Comments". Technometrics 21 (4): 451–466. doi:10.2307/1268284. http://www.jstor.org/stable/1268284.
  17. Hoerl, Arthur E.; Kennard, Robert W.; Hoerl, Roger W. (1985). "Practical Use of Ridge Regression: A Challenge Met". Journal of the Royal Statistical Society. Series C (Applied Statistics) 34 (2): 114–120. http://www.jstor.org/stable/2347363.
  18. Brillinger, David R. (1977). "The Identification of a Particular Nonlinear Time Series System". Biometrika 64 (3): 509–515. doi:10.1093/biomet/64.3.509. http://www.jstor.org/stable/2345326.
  19. Tofallis, C. (2008). "Investment Volatility: A Critique of Standard Beta Estimation and a Simple Way Forward". European Journal of Operational Research 187: 1358. doi:10.1016/j.ejor.2006.09.018. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1076742.
  20. EEMP webpage

References

External links